Search Results: "wart"

26 April 2015

Erich Schubert: Your big data toolchain is a big security risk!

This post is a follow-up to my earlier post on the "sad state of sysadmin in the age of containers". While I was drafting this post, that story got picked up by HackerNews, Reddit and Twitter, sending a lot of comments and emails my way. Surprisingly, many of the comments are supportive of my impression - I would have expected many more insults along the lines of "you just don't like my-favorite-tool, so you rant against using it". But a lot of people seem to share my concerns. Thanks, you surprised me!
Here is the new rant post, in the slightly different context of big data:

Everybody is doing "big data" these days. Or at least, pretending to do so to upper management. A lot of the time, there is no big data. People do more data analysis than before, and stick the "big data" label on it to promote themselves and get a green light from management.
"Big data" is not a technical term. It is a business term, referring to any attempt to get more value out of your business by analyzing data you did not use before. From this point of view, most of such projects are indeed "big data" as in "data-driven revenue generation" projects. It may be unsatisfactory to those interested in the challenges of volume and the other "V's", but this is the reality how the term is used.
But even in those cases where the volume and complexity of the data would warrant the use of all the new toys, people overlook a major problem: the security of their systems and of their data.

The currently offered "big data technology stack" is anything but secure. Sure, companies try to earn money with security add-ons - such as Kerberos authentication, to sell multi-tenancy - and by offering their own version of Hadoop (their "Hadoop distribution").
The security problem is deep inside the "stack". It comes from the way this world ticks: the world of people who constantly follow the latest tool-of-the-day. In many of the projects, you no longer have mostly Linux developers who double as system administrators; you see a lot of Apple iFanboys now. They live in a world where technology is outdated after half a year, so you will not need to support a product longer than that. They love reinstalling their development environment frequently - because each time, they get to change something. They also live in a world where you simply get a new model if your machine breaks down at some point. (Note that this will not work well for your big data project - restarting it from scratch every half year...)
And while Mac users have recently been surprisingly unaffected by various attacks (and unconcerned about e.g. GoToFail, or the failure to fix the rootpipe exploit), the operating system is not considered to be very secure. Combining this with users who do not care is an explosive mixture...
This type of developer - good at getting a prototype website for a startup up and running in a short amount of time, rolling out new features every day to beta-test on the live users - is what currently makes the Dotcom 2.0 bubble grow. It's also this type of user that mainstream products aim at: he has already forgotten what happened half a year ago, is looking for the next tech product to be announced soon, and is willing to buy it as soon as it is available...
This attitude causes a problem at the very heart of the stack: in the way packages are built, upgrades (and security updates) are handled, etc. - nobody is interested in consistency or reproducibility anymore.
Someone commented on my blog that all these tools "seem to be written by 20-year-old" kids. He is probably right. It wouldn't be so bad if we had some experienced sysadmins with a cluebat around - people who have experience building systems that can be maintained for 10 years and deployed securely and automatically, instead of relying on puppet hacks, wget, and unzipping unsigned binary code.
I know that a lot of people don't want to hear this, but:
Your Hadoop system contains unsigned binary code in a number of places - code that people downloaded, uploaded and re-downloaded countless times. There is no guarantee that a .jar ever was what people think it is.
Hadoop has a huge set of dependencies, and little of it has been seriously audited for security - and in particular not in a way that would allow you to check that your binaries were actually built from the audited code.
There might be functionality hidden in the code that just sits there and waits for a system with a hostname somewhat like "yourcompany.com" to start looking for its command-and-control server to steal some key data from your company. The way your systems are built, they probably do not have much of a firewall guarding against this. Much of the software may be constantly calling home, and your DevOps would not notice (nor would they care, anyway).
The mentality of "big data stacks" these days is that of Windows Shareware in the 90s. People downloading random binaries from the Internet, not adequately checked for security (ever heard of anybody running an AntiVirus on his Hadoop cluster?) and installing them everywhere.
And worse: not even keeping track of what they installed over time, or how. Because the tools change every year. But what if that developer leaves? You may never be able to get his stuff running properly again!
Fire-and-forget.
I predict that within the next 5 years, we will have a number of security incidents in various major companies. This is industrial espionage heaven. A lot of companies will cover it up, but some leaks will reach mass media, and there will be a major backlash against this hipster way of stringing together random components.
There is a big "Hadoop bubble" growing, that will eventually burst.
In order to get into a trustworthy state, the big data toolchain needs to:
  • Consolidate. There are too many tools for every job. There are even too many tools to manage your too many tools, and frontends for your frontends.
  • Lose weight. Every project depends on way too many other projects, each of which only contributes a tiny fragment for a very specific use case. Get rid of most dependencies!
  • Modularize. If you can't get rid of a dependency, but it is still only of interest to a small group of users, make it an optional extension module that the user only has to install if he needs this particular functionality.
  • Buildable. Make sure that everybody can build everything from scratch, without having to rely on Maven or Ivy or SBT downloading something automagically in the background. Test your builds offline, with a clean build directory, and document them! Everything must be rebuildable by any sysadmin in a reproducible way, so he can ensure a bug fix is really applied.
  • Distribute. Do not rely on binary downloads from your CDN as sole distribution channel. Instead, encourage and support alternate means of distribution, such as the proper integration in existing and trusted Linux distributions.
  • Maintain compatibility. Successful big data projects will not be fire-and-forget. Eventually, they will need to go into production and then it will be necessary to run them over years. It will be necessary to migrate them to newer, larger clusters. And you must not lose all the data while doing so.
  • Sign. Code needs to be signed, end-of-story.
  • Authenticate. All downloads need to come with a way of checking that the downloaded files match what was uploaded (see the sketch after this list).
  • Integrate. The key feature that makes Linux systems so good as servers is the all-round integrated software management. You tell the system to update - and you have different update channels available, such as a more conservative "stable/LTS" channel, a channel that gets you the latest version after basic QA, and a channel that gives you the latest versions shortly after their upload to help with QA - and it covers almost all software on your system, so it does not matter whether the security fix is in your kernel, web server, library, auxiliary service, extension module or scripting language: it will pull the fix and update you in no time.
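As a minimal illustration of the "Sign" and "Authenticate" points above, here is a Python sketch that verifies a downloaded artifact against a published SHA-256 checksum before you unpack it. The file names are hypothetical, and a real deployment would also verify a GPG signature over the checksum file itself:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file and return its hex-encoded SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(artifact, checksum_file):
    """Compare the artifact's digest against the published checksum."""
    with open(checksum_file) as f:
        expected = f.read().split()[0].lower()
    actual = sha256_of(artifact)
    if actual != expected:
        raise SystemExit("checksum mismatch for %s: expected %s, got %s"
                         % (artifact, expected, actual))
    print(artifact + ": OK")

# Hypothetical file names - adjust to whatever your mirror publishes.
verify("hadoop-2.6.0.tar.gz", "hadoop-2.6.0.tar.gz.sha256")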
Now you may argue that Hortonworks, Cloudera, Bigtop etc. already provide packages. Well ... they provide crap. They have something they call a "package", but it fails by any quality standard. Technically, a Wartburg is a car; but not one that would pass today's safety regulations...
For example, they only support Ubuntu 12.04 - a three-year-old Ubuntu is the latest version they support... Furthermore, these packages are all roughly the same. Cloudera eventually handed over their efforts to "the community" (in other words, they gave up on doing it themselves, and hoped that someone else would clean up their mess); and Hortonworks HDP (and maybe Pivotal HD, too) is derived from these efforts as well. Much of what they do is offer some extra documentation and training for the packages they built using Bigtop with minimal effort.
The "spark" .deb packages of Bigtop, for example, are empty. They forgot to include the .jars in the package. Do I really need to give more examples of bad packaging decisions? All bigtop packages now depend on their own version of groovy - for a single script. Instead of rewriting this script in an already required language - or in a way that it would run on the distribution-provided groovy version - they decided to make yet another package, bigtop-groovy.
When I read about Hortonworks and IBM announcing their "Open Data Platform", I could not care less. As far as I can tell, they are only sticking their label on the existing tools anyway. Thus, I'm also not surprised that Cloudera and MapR do not join this rebranding effort - given the low divergence of Hadoop, who would need such a label anyway?
So why does this matter? Essentially, if anything does not work, you are currently toast. Say there is a bug in Hadoop that makes it fail to process your data. Your business is belly-up because of that; no data is processed anymore, you are a vegetable. Who is going to fix it? All these "distributions" are built from the same, messy branch. There are probably only a dozen people around the world who have figured this out well enough to be able to fully build this toolchain. Apparently, none of the "Hadoop" companies are able to support a newer Ubuntu than 12.04 - are you sure they have really understood what they are selling? I have doubts. All the freelancers out there - they know how to download and use Hadoop. But can they get that business-critical bug fix into the toolchain to get you up and running again? This is much worse than with Linux distributions. Those have build daemons - servers that continuously check that they can compile all the software that is there. You need to type two well-documented lines to rebuild a typical Linux package from scratch on your workstation - any experienced developer can follow the manual and get a fix into the package. There are even people who try to recompile complete distributions with a different compiler to discover compatibility issues early that may arise in the future.
In other words, the "Hadoop distribution" they are selling you is not code they compiled themselves. It is mostly .jar files they downloaded from unsigned, unencrypted, unverified sources on the internet. They have no idea how to rebuild these parts, who compiled that, and how it was built. At most, they know for the very last layer. You can figure out how to recompile the Hadoop .jar. But when doing so, your computer will download a lot of binaries. It will not warn you of that, and they are included in the Hadoop distributions, too.
As is, I cannot recommend entrusting your business data to Hadoop.
It is probably okay to copy the data into HDFS and play with it - in particular if you keep your cluster and development machines isolated with strong firewalls - but be prepared to toss everything and restart from scratch. It's not ready yet for prime time, and as they keep on adding more and more unneeded cruft, it does not look like it will be ready anytime soon.

One more example of the immaturity of the toolchain:
The scala package from scala-lang.org cannot be cleanly installed as an upgrade to the old scala package that already exists in Ubuntu and Debian (and the distributions seem to have given up on compiling a newer Scala due to a stupid Catch-22 build process, making it very hacky to bootstrap scala and sbt compilation).
And the "upstream" package also cannot be easily fixed, because it is not built with standard packaging tools, but with an automagic sbt helper that lacks important functionality (in particular, access to the Replaces: field, or even cleaner: a way of splitting the package properly into components) instead - obviously written by someone with 0 experience in packaging for Ubuntu or Debian; and instead of using the proven tools, he decided to hack some wrapper that tries to automatically do things the wrong way...

I'm convinced that most "big data" projects will turn out to be a miserable failure. Either due to overmanagement or undermanagement, and due to lack of experience with the data, tools, and project management... Except that - of course - nobody will be willing to admit these failures. Since all these projects are political projects, they by definition must be successful, even if they never go into production, and never earn a single dollar.

13 March 2015

Dirk Eddelbuettel: Why Drat? A Guest Post by Steven Pav

Editorial Note: The following post was kindly contributed by Steven Pav.

Why Drat?

After playing around with drat for a few days now, my impressions of it are best captured by Dirk's quote:
It just works.

Demo

To get some idea of what I mean by this, suppose you are a happy consumer of R packages, but want access to, say, the latest, greatest releases of my distribution package, sadists. You can simply add the following to your .Rprofile file:
drat::add("shabbychef")
After this, you instantly have access to new releases in the github/shabbychef drat store via the package tools you already know and tolerate. You can use
install.packages('sadists')
to install the sadists package from the drat store, for example. Similarly, if you issue
update.packages(ask=FALSE)
all the drat stores you have added will be checked for package updates, along with their dependencies which may well come from other repositories including CRAN.

Use cases

The most obvious use cases are:
  1. Micro releases. For package authors, this provides a means to get feedback from the early adopters, but also allows one to push small changes and bug fixes without burning through your CRAN karma (if you have any left). My personal drat store tends to be a few minor releases ahead of my CRAN releases.
  2. Local repositories. In my professional life, I write and maintain proprietary packages. Pushing package updates used to involve saving the package .tar.gz to a NAS, then calling something like R CMD INSTALL package_name_0.3.1.9001.tar.gz. This is not something I wanted to ask of my colleagues. With drat, they can instead add the following stanza to .Rprofile: drat:::addRepo('localRepo','file:///mnt/NAS/r/local/drat'), and then rely on update.packages to do the rest.
I suspect that in the future, drat might be (ab)used in the following ways:
  1. Rolling your own vanilla CRAN mirror, though I suspect there are better existing ways to accomplish this.
  2. Patching CRAN. Suppose you found a bug in a package on CRAN (inconceivable!). As it stands now, you email the maintainer, and wait for a fix. Maybe the patch is trivial, but suppose it is never delivered. Now, you can simply make the patch yourself, pick a higher revision number, and stash it in your drat store. The only downside is that eventually the package maintainer might bump their revision number without pushing a fix, and you are stuck in an arms race of version numbers.
  3. Forgoing CRAN altogether. While some package maintainers might find this attractive, I think I would prefer a single huge repository, warts and all, to a landscape of a million microrepos. Perhaps some enterprising group will set up a CRAN-like drat store on github, and accept packages by pull request (whether github CDN can or will support the traffic that CRAN does is another matter), but this seems a bit too futuristic for me now.

My wish list

In exchange for writing this blog post, I get to lobby Dirk for some features in drat:
  1. I shudder at the thought of hundreds of tiny drat stores. Perhaps there should be a way to aggregate addRepo commands in some way. This would allow curators to publish their suggested lists of repos.
  2. Drat stores are served in the gh-pages branch of a github repo. I wish there were some way to make the index.html file in that directory reflect the packages present in the sources. Maybe this could be achieved with some canonical RMarkdown code that most people use.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

9 March 2015

Joey Hess: 7drl 2015 day 3 movement at last

Got the player moving in the map! And, got the map to be deadly in its own special way.
        HeadCrush -> do
                showMessage "You die."
                endThread
Even winning the game is implemented. The game has a beginning, a middle, and an end. I left the player movement mostly unconstrained, today, while I was working on things to do with the end of the game, since that makes it easier to play through and test them. Tomorrow, I will turn on fully constrained movement (an easy change), implement inventory (which is very connected to movement constraints in Scroll), and hope to start on the spell system too.
At this point, Scroll is 622 lines of code, including content. Of which, I notice, fully 119 are types and type classes. Only 4 days left! Eep! I'm very glad that Scroll's central antagonist is already written. I don't plan to add other creatures, which will save some time.
Last night as I was drifting off to sleep, a way to implement my own threading system for my roguelike came to me. Since time in a roguelike happens in discrete ticks, as the player takes each action, normal OS threads are not suitable. And in my case, I'm doing everything in pure code anyway and certainly cannot fork off a thread for some background job. But since I'm using continuation passing style, I can just write my own fork that takes two continuations and combines them, causing both to be run on each tick, and recursing to handle combining the resulting continuations. It was really quite simple to implement. It typechecked on the first try, even!
-- Combine two threads: run both continuations on every tick,
-- recursing to combine whatever continuations they produce next.
fork :: M NextStep -> M NextStep -> M NextStep
fork job rest = do
        jn <- job
        rn <- rest
        runthread jn rn
  where
        -- Both threads want to continue: yield a combined continuation.
        runthread (NextStep _ (Just contjob)) (NextStep v (Just contr)) =
                return $ NextStep v $ Just $ \i -> do
                        jn <- contjob i
                        rn <- contr i
                        runthread jn rn
        -- The forked job has ended: only the main thread continues.
        runthread (NextStep _ Nothing) (NextStep v (Just contr)) =
                return $ NextStep v (Just contr)
        -- The main thread has ended: stop everything.
        runthread _ (NextStep v Nothing) =
                return $ NextStep v Nothing

-- A thread ends by yielding no continuation.
endThread :: M NextStep
endThread = nextStep Nothing

-- Run a job alongside the normal continuation of the game.
background :: M NextStep -> M NextStep
background job = fork job continue

-- Example: show a message, and clear it on the next tick.
demo :: M NextStep
demo = do
    showMessage "foo"
    background $ next $ const $
        clearMessage >> endThread
That has some warts, but it's good enough for my purposes, and pretty awesome for a threading system in 66 LOC.

17 February 2015

John Goerzen: "Has Linux lost its way?" comments prompt a Debian developer to revisit FreeBSD after 20 years

I'll admit it. I have a soft spot for FreeBSD. FreeBSD was the first Unix I ran, and it was somewhere around 20 years ago that I did so, before I switched to Debian. Even then, I still used some of the FreeBSD Handbook to learn Linux, because Debian didn't have the great Reference that it does now. Anyhow, some comments in my recent posts ("Has modern Linux lost its way?" and "Reactions to that, and the value of simplicity"), plus a latent desire to see how ZFS fares in FreeBSD, caused me to try it out. I installed it both in VirtualBox under Debian, and on an old 64-bit Thinkpad sitting in my basement that previously ran Debian. The results? A mixture of amazing and disappointing. I will say that I am quite glad that both exist; there is plenty of innovation happening everywhere and neat features exist everywhere, too. But I can also come right out and say that the statement that "FreeBSD doesn't have issues like Linux does" is false and misleading. In many cases, it's running the exact same stack. In others, it's better, but there are also others where it's worse. Perhaps this article might dispel a bit of the FUD surrounding jessie, while also showing off some of the nice things FreeBSD does. My conclusion: Both jessie and FreeBSD 10.1 are awesome Free operating systems, but both have their warts. This article is more about FreeBSD than Debian, but it will discuss a few of Debian's warts as well.

The experience

My initial reaction to FreeBSD was: wow, this feels so familiar. It reminds me of a commercial Unix, or maybe of Linux from a few years ago. A minimal, well-documented base system, everything pretty much in logical places in the filesystem, and solid memory management. I felt right at home. It was almost reassuring, even. Putting together a FreeBSD box is a lot of package installing and config file editing. The FreeBSD Handbook, describing how to install X, talks about editing this or that file for this or that feature. I like being able to learn directly how things fit together by doing this. But then you start remembering the reasons you didn't like Linux a few years ago, or the commercial Unixes: maybe it's that programs like apache are still not as well supported, or maybe it's that the default vi has this tendency to corrupt the terminal periodically, or perhaps it's that root's default shell is csh. Or perhaps it's that I have to do a lot of package installing and config file editing. It is not quite the learning experience it once was, either; now there are things like "paste this XML file into some obscure polkit location to make your mouse work" or something. Overall, there are some areas where FreeBSD kills it in a way no other OS does. It is unquestionably awesome in several areas. But there are a whole bunch of areas where it's about 80% as good as Linux, a number of areas (even polkit, dbus, and hal) where it's using the exact same stack Linux is (so all these comments about FreeBSD being so differently put together strike me as hollow), and frankly some areas that need a lot of work and make it hard to manage systems in a secure and stable way.

The amazing

Let's get this out there: I've used ZFS too much to use any OS that doesn't support it or something like it. Right now, I'm not aware of anything like ZFS that is generally stable and doesn't cost a fortune, so pretty much: if your Unix doesn't do ZFS, I'm not interested. (btrfs isn't there yet, but will be awesome when it is.) That's why I picked FreeBSD for this, rather than NetBSD or OpenBSD.
ZFS on FreeBSD is simply awesome. They have integrated it extremely well. The installer supports root on zfs, even encrypted root on zfs (though neither is a default). top on a FreeBSD system shows a line of ZFS ARC (cache) stats right alongside everything else. The ZFS defaults for maximum cache size, readahead, etc. auto-tune themselves at boot (unless overridden) based on the amount of RAM in a system and the system type. Seriously, these folks have thought of everything and it just reeks of solid. I haven't seen ZFS this well integrated outside the Solaris-type OSs. I have been using ZFSOnLinux for some time now, but it is just not as mature as ZFS on FreeBSD. ZoL, for instance, still has some memory tuning issues, and is not really suggested for 32-bit machines. FreeBSD just nails it. ZFS on FreeBSD even supports TRIM, which is not available in ZoL and, I think, fairly unique even among OpenZFS platforms. It also supports delegated administration of the filesystem, both to users and to jails on the system, seemingly very similar to Solaris zones. FreeBSD also supports beadm, which is like a similar tool on Solaris. This lets you basically use ZFS snapshots to make lightweight "boot environments", so you can select which to boot into. This is useful, say, before doing upgrades. Then there are jails. Linux has tried so hard to get this right, and fallen on its face so many times, a person just wants to take pity sometimes. We've had linux-vserver, openvz, lxc, and still none of them match what FreeBSD jails have done for a long time. Linux's current jail-du-jour is LXC, though it is extremely difficult to configure in a secure way. Even its author comments that "you won't hear any of the LXC maintainers tell you that LXC is secure" and that it pretty much requires AppArmor profiles to achieve reasonable security. These are still rather in flux, as I found out last time I tried LXC a few months ago. My confidence in LXC being as secure as, say, KVM or FreeBSD is simply very low. FreeBSD's jails are simple and well-documented where LXC is complex and hard to figure out. Their security is fairly transparent and easy to control, and they just work well. I do think LXC is moving in the right direction and might even get there in a couple years, but I am quite skeptical that even Docker is getting the security completely right.

The simply different

People have been throwing around the word "distribution" with respect to FreeBSD, PC-BSD, etc. in recent years. There is an analogy there, but it's not perfect. In the Linux ecosystem, there is a kernel project, a libc project, a coreutils project, a udev project, a systemd/sysvinit/whatever project, etc. You get the idea. In FreeBSD, there is a base system project. This one project covers the kernel and the base userland. Some of what they use in the base system is code pulled in from elsewhere but maintained in their tree (ssh), some is completely homegrown (kernel), etc. But in the end, they have a nicely-integrated base system that always gets upgraded in sync. In the Linux world, the distribution makers are responsible for integrating the bits from everywhere into a coherent whole. FreeBSD is something of a toolkit to build up your system. Gentoo might be an analogy on the Linux side. On the other end of the spectrum, Ubuntu is a "just install it and it works, tweak later" sort of setup. Debian straddles the middle ground, offering both approaches in many cases. There are pros and cons to each approach. Generally, I don't think either one is better.
They are just different.

The not-quite-there

I said that there are a lot of things in FreeBSD that are about 80% of where Linux is. Let me touch on them here. Its laptop support leaves something to be desired. I installed it on a few-years-old Thinkpad - basically the best possible platform for working suspend in a Free OS. It has worked perfectly out of the box in Debian for years. In FreeBSD, suspend only works if it's in text mode. If X is running, the video gets corrupted and the system hangs. I have not tried to debug it further, but would also note that suspend on closed lid is not automatic in FreeBSD; the somewhat obscure instructions tell you what policykit pkla file to edit to make suspend work in XFCE. (Incidentally, they also say what policykit file to edit to make the shutdown/restart options work.) Its storage subsystem also has some surprising misses. Its rough version of LVM, LUKS, and md-raid is called GEOM. GEOM, however, supports only RAID0, RAID1, and RAID3. It does not support RAID5 or RAID6 in software RAID configurations! Linux's md-raid, by comparison, supports RAID0, RAID1, RAID4, RAID5, RAID6, etc. There seems to be a highly experimental RAID5 patchset floating around for many years, but it is certainly not integrated into the latest release kernel. The current documentation makes no mention of RAID5, although it seems that a dated logical volume manager supported it. In any case, RAID5 does not seem to be well-supported in software like it is in Linux. ZFS does have its raidz1 level, which is roughly the same as RAID5. However, that requires full use of ZFS. ZFS also does not support some common operations, like adding a single disk to an existing RAID5 group (which is possible with md-raid and many other implementations). This is a ZFS limitation on all platforms. FreeBSD's filesystem support is rather a miss. They once had support for Linux ext* filesystems using the actual Linux code, but ripped it out because it was GPL and rewrote it so it had a BSD license. The resulting driver really only works with ext2 filesystems, as it doesn't work with ext3/ext4 in many situations. Frankly I don't see why they bothered; they now have something that is BSD-licensed but only works with a filesystem so old nobody uses it anymore. There are only two FreeBSD filesystems that are really usable: UFS2 and ZFS. Virtualization under FreeBSD is also not all that present. Although it does support the VirtualBox Open Source Edition, this is not really a full-featured or fast enough virtualization environment for a server. Its other option is bhyve, which looks to be something of a Xen clone. bhyve, however, does not support Windows guests, and requires some hoops to even boot Linux guest installers. It will be several years at least before it reaches feature-parity with where KVM is today, I suspect. One can run FreeBSD as a guest under a number of different virtualization systems, but their instructions for making the mouse work best under VirtualBox did not work. There may have been some X.Org reshuffle in FreeBSD that wasn't taken into account. The installer can be nice and fast in some situations, but one wonders a little bit about QA. I had it lock up on me twice. Turns out this is a known bug, reported 2 months ago with no activity, in which the installer attempts to use a package manager that it hasn't set up yet to install optional docs. I guess the devs aren't installing the docs in testing. There is nothing like Dropbox for FreeBSD.
Apparently this is because FreeBSD has nothing like Linux's inotify. The Linux Dropbox does not work in FreeBSD's Linux mode. There are sketchy reports of people getting an OwnCloud client to work, but in something more akin to rsync rather than instant-sync mode, if they get it working at all. Some run Dropbox under wine, apparently. The desktop environments tend to need a lot more configuration work to get them going than on Linux. There's a lot of editing of polkit, hal, dbus, etc. config files mentioned in various places. So, not only does FreeBSD use a lot of the same components that cause confusion in Linux, it doesn't really configure them for you as much out of the box. FreeBSD doesn't support as many platforms as Linux. FreeBSD has only two platforms that are fully supported: i386 and amd64. You'll see people refer to a list of other platforms that are "supported", but those don't have security support, official releases, or even built packages. They include arm, ia64, powerpc, and sparc64.

The bad: package management

Roughly 20 years ago, this was one of the things that pulled me to Debian. Perhaps I am spoiled from running the distribution that has been the gold standard for package management for so long, but I find FreeBSD's package management - even pkg-ng in 10.1-RELEASE - to be lacking in a number of important ways. To start with, FreeBSD actually has two different package management systems: one for the base system, and one for what they call the ports/packages collection ("ports" being the way to install from source, and "packages" being the way to install from binaries, but both related to the same tree). For the base system, there is freebsd-update, which can install patches and major upgrades. It also has a cron option to automate this. Sadly, it has no way of automatically indicating to a calling script whether a reboot is necessary. freebsd-update really manages less than a dozen packages, though. The rest are managed by pkg. And pkg, it turns out, has a number of issues. The biggest: it can take a week to get security updates. The FreeBSD handbook explains "pkg audit -F", which will look at your installed packages (but NOT the ones in the base system) and alert you to packages that need to be updated, similar to a stripped-down version of Debian's debsecan. I discovered this myself, when pkg audit -F showed a vulnerability in xorg, but pkg upgrade showed my system was up-to-date. It is not documented in the Handbook, but people on the mailing list explained it to me. There are workarounds, but they can be laborious. If that's not bad enough, FreeBSD has no way to automatically install security patches for things in the packages collection. Debian has several (unattended-upgrades, cron-apt, etc.). There is "pkg upgrade", but it upgrades everything on the system, which may be quite a bit more than you want to be upgraded. So: if you want to run Apache with PHP, and want it to just always apply security patches, FreeBSD packages are not up to the job like Debian's are. The pkg tool doesn't have very good error-handling. In fact, its error handling seems to be nonexistent at times. I noticed that some packages had failures during install time, but pkg ignored them and marked the packages as correctly installed. I only noticed there was a problem because I happened to glance at the screen at the right moment during messages about hundreds of packages.
In Debian, by contrast, if there are any failures, at the end of the run you get a nice report of which packages failed, and an exit status to use in scripts. It also has another issue that Debian resolved about a decade ago: package scripts displaying messages that are important for the administrator, but showing so many of them that they scroll off the screen and are never seen. I submitted a bug report for this one also. Some of these things just make me question the design of pkg. If I can't trust it to accurately report whether the installation succeeded, or show me the important info I need to see, then to what extent can I trust it? Then there is the question of testing of the ports/packages. It seems that, automated tests aside, basically everyone is running off the master branch of the ports/packages. That's like running Debian unstable on your servers. I am distinctly uncomfortable with this notion, though it seems FreeBSD people report it mostly works well. There are some other issues, too: FreeBSD ports make no distinction between development and runtime files like Debian's packages do. So, just by virtue of wanting to run a graphical desktop, you get all of the static libraries, include files, build scripts, etc. for XOrg installed. For a project as concerned about licensing as FreeBSD, the packages collection does not have separate sections like Debian's main, contrib, and non-free. It's all in one big pot: BSD-license, GPL-license, proprietary-without-source license. There is /usr/local/share/licenses where you can look up a license for each package, but there is no way with FreeBSD, like there is with Debian, to say "never even show me packages that aren't DFSG-free". This is useful, for instance, when running in a company, to make sure you never install packages that are for personal use only or something.

The bad: ABI stability

I'm used to being able to run binaries I compiled years ago on a modern system. This is generally possible in Linux, assuming you have the correct shared libraries available. In FreeBSD, this is explicitly NOT possible. After every major version upgrade, you must reinstall or recompile every binary on your system. This is not necessarily a showstopper for me, but it is a hassle for a lot of people. Update 2015-02-17: Some people in the comments are pointing out compat packages in the ports that may help with this situation. My comment was based on advice in the FreeBSD Handbook stating "After a major version upgrade, all installed packages and ports need to be upgraded". I have not directly tried this, so if the Handbook is overstating the need, then this point may be in error.

Conclusions

As I said above, I found little validation of the comments that the Debian ecosystem is noticeably worse than the FreeBSD one. Debian has its warts too - particularly with keeping software up-to-date. You can see that the two projects are designed around a different passion: FreeBSD's around the base system, and Debian's around an integrated whole system. It would be wrong to say that either of those is always better. FreeBSD's approach clearly produces some leading features, especially jails and ZFS integration. Yet Debian's approach also produces some leading features in the way of package management and security maintainability beyond the small base. My criticism of excessive complexity in the polkit/cgmanager/dbus area still stands.
But to those people commenting that FreeBSD hasn't lost its way like Linux has, I would point out that FreeBSD mostly uses these same components also, and FreeBSD has excessive complexity in its ports/package system and system management tools. I think it's a draw. You pick the best for your use case. If you're looking for a platform to run a single custom app, then perhaps all of the Debian package management benefits don't apply to you (you may not even need FreeBSD's packages, or just a few). The FreeBSD ZFS support or jails may well appeal. If you're looking to run a desktop environment, or a server with some application that needs a ton of PHP, Python, Perl, or C libraries, then Debian's package management and security handling may well be attractive. I am disappointed that Debian GNU/kFreeBSD will not be a release architecture in jessie. That project had the promise to provide a best of both worlds for those who want jails or tight ZFS integration.

13 November 2014

Joey Hess: on leaving

I left Debian. I don't really have a lot to say about why, but I do want to clear one thing up right away. It's not about systemd. As far as systemd goes, I agree with my friend John Goerzen:
I promise you 18 years from now, it will not matter what init Debian chose in 2014. It will probably barely matter in 3 years.
read the rest And with Jonathan Corbet:
However things turn out, if it becomes clear that there is a better solution than systemd available, we will be able to move to it.
read the rest I have no problem with trying out a piece of Free Software, that might have abrasive authors, all kinds of technical warts, a debatable design, scope creep etc. None of that stopped me from giving Linux a try in 1995, and I'm glad I jumped in with both feet. It's important to be unafraid to make a decision, try it out, and if it doesn't work, be unafraid to iterate, rethink, or throw a bad choice out. That's how progress happens. Free Software empowers us to do this. Debian used to be a lot better at that than it is now. This seems to have less to do with the size of the project, and more to do with the project having aged, ossified, and become comfortable with increasing layers of complexity around how it makes decisions. To the point that I no longer feel I can understand the decision-making process at all ... or at least, that I'd rather be spending those scarce brain cycles on understanding something equally hard but more useful, like category theory. It's been a long time since Debian was my main focus; I feel much more useful when I'm working in a small nimble project, making fast and loose decisions and iterating on them. Recent events brought it to a head, but this is not a new feeling. I've been less and less involved in Debian since 2007, when I dropped maintaining any packages I wasn't the upstream author of, and took a year of mostly ignoring the larger project. Now I've made the shift from being a Debian developer to being an upstream author of stuff in Debian (and other distros). It seems best to make a clean break rather than hang around and risk being sucked back in. My mailbox has been amazing over the past week by the way. I've heard from so many friends, and it's been very sad but also beautiful.

30 September 2014

Gunnar Wolf: Diego Gómez: Imprisoned for sharing

I got word via the Electronic Frontier Foundation about an act of injustice happening to a person for doing... not only what I do day to day, but what I promote and believe to be right: sharing academic articles. Diego is a Colombian, working towards his Masters degree on conservation and biodiversity in Costa Rica. He is now facing up to eight years' imprisonment for... sharing a scholarly article he did not author on Scribd. Many people lack the knowledge and skills to properly set up a venue to share their articles with people they know. Many people will hope for the best and expect academic publishers to be fundamentally good, not to send legal threats just for the simple, noncommercial act of sharing knowledge. Sharing knowledge is fundamental for science to grow, for knowledge to rise. Besides, most scholarly studies are funded by public money, and, as the saying goes, they should benefit the public. And the public is everybody, is all of us. And yes, if this sounds in any way like what drove Aaron Swartz to his sad suicide early last year... it is exactly the same thing. Thankfully (although, sadly, after the fact), thousands of people strongly stood on Aaron's side on that demand. Please sign the EFF petition to help Diego, share this, and try to spread the word on the real-world need for Open Access mandates for academics! Some links with further information:

23 May 2014

Mike Gabriel: X2Go on FLOSS Weekly

On May 21st 2014, the two Mikes (Gabriel & DePaulo) from the X2Go core developer team were interviewed about X2Go by the famous Randal L. Schwartz (merlyn) and the equally famous Randi Harper (freebsdgirl) on the FLOSS Weekly netcast [1]. If you're having trouble watching the embedded video on that page, try one of the alternatives below: HD Video [2]
SD Video, large [3]
SD Video, small [4]
Audio only [5]
light+love,
Mike
[1] http://twit.tv/floss295

13 April 2014

Andreas Metzler: balance sheet snowboarding season 2013/14

Little snow, but an above-average season. The macro weather situation was very stable this year: very high snowfall in Austria's south (Eastern Tyrol and Carinthia), and long periods of warm and sunny weather with little precipitation on the northern side of the Alps (i.e. us). This had me going snowboarding a lot, but almost exclusively in Damüls, since it is characterized by a) grassy terrain (no stones) and b) huge numbers of snow cannons. I started early (December 7) with another 6 days on piste in December. If there had been more snow, the season would have been a long one, too - season's end depends on the timing of Easter (because of the holidays), which would have been late. However, I again stopped rather early; the last day was March 30. In addition to the days listed below I had an early season's opening at the glacier in Pitztal. I attended a Pureboarding workshop in November (21st to 23rd). Looking back at the season, I am not quite satisfied with my progress; I just have not managed to implement and practise the technique I should have learned there. It is next to impossible when the slopes are full, and when they aren't, one likes to give it a run. ;-) Here is the balance sheet:
season                    2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13 2013/14
number of (partial) days       25      17      29      37      30      30      25      23      30
Damüls                         10      10       5      10      16      23      10       4      29
Diedamskopf                    15       4      24      23      13       4      14      19       1
Warth/Schröcken                 0       3       0       4       1       3       1       0       0
total meters of altitude   124634   74096  219936  226774  202089  203918  228588  203562  274706
highscore                  10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m  12848m
# of runs                     309     189     503     551     462     449     516     468     597

12 April 2014

Mario Lang: Emacs Chess

Between 2001 and 2004, John Wiegley wrote emacs-chess, a rather complete Chess library for Emacs. I found it around 2004, and was immediately hooked. Why? Because Emacs is configurable, and I was hoping that I could customize the chessboard display much more than with any other console-based chess program I have ever seen. And I was right. One of the four chessboard display types is exactly what I was looking for, chess-plain.el:
  
8 tSlDjLsT 
7 XxXxXxXx 
6         
5         
4         
3         
2 pPpPpPpP 
1 RnBqKbNr 
  
  abcdefgh
This might look confusing at first, but I have to admit that I grew rather fond of this way of displaying chess positions as ASCII diagrams. In this configuration, initial letters of (mostly) German chess piece names are used for the black pieces, and English chess piece names are used for the white pieces. Uppercase is used to indicate that a piece is on a black square, and braille dot 7 is used to indicate an empty black square. chess-plain is completely configurable though, so you can have more classic diagrams like this as well:
  
8 rnbqkbnr 
7 pppppppp 
6  + + + + 
5 + + + +  
4  + + + + 
3 + + + +  
2 PPPPPPPP 
1 RNBQKBNR 
  
  abcdefgh
Here, upper case letters indicate white pieces, and lower case letters black pieces. Black squares are indicated with a plus sign. However, as with many Free Software projects, Emacs Chess was rather dormant for the last 10 years. For some reason that I cannot even remember right now, my interest in Emacs Chess was reignited roughly 5 weeks ago.
Universal Chess Interface

It all began when I did a casual apt-cache search for chess engines, only to discover that a number of free chess engines had been developed and packaged for Debian in the last 10 years. In 2004 there were basically only GNUChess, Phalanx and Crafty. These days, a number of UCI-based chess engines have been added, like Stockfish, Glaurung, Fruit or Toga2. So I started by learning how the new chess engine communication protocol, UCI, actually works. After a bit of playing around, I had a basic engine module for Emacs Chess that could play against Stockfish. After I had developed a thin layer for all the things that UCI engines have in common (chess-uci.el), it was actually very easy to implement support for Stockfish, Glaurung and Fruit in Emacs Chess. Good, three new free engines supported.
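chess-uci.el is Emacs Lisp, but the protocol itself is just line-oriented text over stdin/stdout. As a rough illustration of the handshake - a Python sketch, not the actual Emacs code; it assumes a stockfish binary on your PATH:

import subprocess

# Any UCI engine works the same way; "stockfish" is just an example binary.
engine = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)

def send(command):
    engine.stdin.write(command + "\n")
    engine.stdin.flush()

def read_until(token):
    """Read engine output line by line until a line starts with `token`."""
    while True:
        line = engine.stdout.readline().strip()
        if line.startswith(token):
            return line

send("uci")                           # identify ourselves as a UCI GUI
read_until("uciok")                   # engine lists its options, then confirms
send("isready")
read_until("readyok")
send("position startpos moves e2e4")  # set up a position
send("go movetime 1000")              # let the engine think for one second
print(read_until("bestmove"))         # e.g. "bestmove e7e5 ponder g1f3"
send("quit")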
Opening books

When I learnt about the UCI protocol, I discovered that most UCI engines these days do not do their own book handling. In fact, the GUI is sort of expected to do the opening book moves. And here one thing led to another. There is quite good documentation about the Polyglot chess opening book binary format on the net. And since I absolutely love to write binary data decoders in Emacs Lisp (don't ask, I don't know why), I immediately started to write Polyglot book handling code in Emacs Lisp, see chess-polyglot.el. It turns out that it is relatively simple and actually performs very well. Even a lookup in an opening book bigger than 100 megabytes happens more or less instantaneously, so you do not notice the time required to find moves in an opening book. Binary search is just great. And binary searching binary data in Emacs Lisp is really fun :-). So Emacs Chess can now load and use Polyglot opening book files. I integrated this functionality into the common UCI engine module, so Emacs Chess, when fed with a Polyglot opening book, can now choose moves from that book instead of consulting the engine to calculate a move. Very neat! Note that you can create your own opening books from PGN collections, or just download a Polyglot book made by someone else.
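For readers curious why this is fast: a Polyglot book is a flat file of fixed-size 16-byte entries - a big-endian 64-bit Zobrist key for the position, a 16-bit move, a 16-bit weight and a 32-bit learn field - sorted by key, which is what makes a direct binary search over the file possible. A Python sketch of the same lookup idea (chess-polyglot.el does this in Emacs Lisp):

import struct

ENTRY_SIZE = 16  # 8-byte key + 2-byte move + 2-byte weight + 4-byte learn

def lookup(book_path, key):
    """Binary-search a Polyglot book for all entries matching `key`."""
    entries = []
    with open(book_path, "rb") as f:
        f.seek(0, 2)                        # seek to end to get the file size
        lo, hi = 0, f.tell() // ENTRY_SIZE
        while lo < hi:                      # find the first entry >= key
            mid = (lo + hi) // 2
            f.seek(mid * ENTRY_SIZE)
            (k,) = struct.unpack(">Q", f.read(8))
            if k < key:
                lo = mid + 1
            else:
                hi = mid
        f.seek(lo * ENTRY_SIZE)             # then scan forward over the run
        while True:
            raw = f.read(ENTRY_SIZE)
            if len(raw) < ENTRY_SIZE:
                break
            k, move, weight, learn = struct.unpack(">QHHI", raw)
            if k != key:
                break
            entries.append((move, weight))
    return entries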
Internet Chess Servers

Later I reworked the internet chess server backend of Emacs Chess a bit (sought games are now displayed with tabulated-list-mode), and found and fixed some (rather unexpected) bugs in the way legal moves are calculated (if we take the opponent's rook, their ability to castle needs to be cleared). Emacs Chess supports two of the most well-known internet chess servers: the Free Internet Chess Server (FICS) and chessclub.com (ICC).
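That castling bug is a nice, self-contained rule: whenever a capture lands on a rook's home square, the matching castling right has to be revoked. A tiny Python sketch of the rule (illustrative only - the actual fix lives in the Emacs Lisp move-generation code):

# Map each rook home square to the castling right it supports
# (FEN-style letters: K/Q for white, k/q for black).
ROOK_SQUARES = {"h1": "K", "a1": "Q", "h8": "k", "a8": "q"}

def rights_after_capture(rights, capture_square):
    """Drop the castling right tied to a rook captured on its home square."""
    lost = ROOK_SQUARES.get(capture_square)
    return rights - {lost} if lost else rights

# White plays Rxa8: black can no longer castle queenside.
print(rights_after_capture({"K", "Q", "k", "q"}, "a8"))  # {'K', 'Q', 'k'}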
A Chess engine written in Emacs Lisp

And then I rediscovered my own little chess engine implemented in Emacs Lisp. I wrote it back in 2004, but never really finished it. After I finally found a small (but important) bug in the static position evaluation function, I was motivated enough to fix my native Emacs Lisp chess engine. I implemented quiescence search so that capture combinations are actually evaluated and not just pruned at a hard limit. This made the engine quite a bit slower, but it actually results in relatively good play. Since the thinking time went up, I implemented a small progress bar so one can actually watch what the engine is doing right now. chess-ai.el is a very small Lisp implementation of a chess engine. Static evaluation, alpha-beta and quiescence search included. It covers the basics, so to speak. So if you don't have any of the above mentioned external engines installed, you can even play a game of Chess against Emacs directly.
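The idea of quiescence search, in sketch form: instead of cutting the search off at a fixed depth in the middle of a capture sequence, keep resolving captures until the position is quiet. A Python sketch of the shape of it - evaluate, moves, captures and apply_move are stand-ins for an engine's real position API and are not defined here; chess-ai.el implements the same scheme in Emacs Lisp:

def quiesce(pos, alpha, beta):
    """Resolve capture sequences so we never evaluate mid-exchange."""
    stand_pat = evaluate(pos)       # static score, from the side to move
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in captures(pos):      # consider capture moves only
        score = -quiesce(apply_move(pos, move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def alphabeta(pos, depth, alpha, beta):
    """Plain negamax alpha-beta, falling into quiescence at the horizon."""
    if depth == 0:
        return quiesce(pos, alpha, beta)
    for move in moves(pos):
        score = -alphabeta(apply_move(pos, move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta             # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha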
Other features

The feature list of Emacs Chess is rather impressive. You can not just play a game of Chess against an engine, you can also play against another human (either via ICS or directly from Emacs to Emacs), view and edit PGN files, solve chess puzzles, and much much more. Emacs Chess is really a universal chess interface for Emacs.
Emacs-chess 2.0

In 2004, John and I were already planning to get emacs-chess 2.0 out the door. Well, 10 years have passed, and both of us had forgotten about this wonderful codebase. I am trying to change this. I am in development/maintenance mode for emacs-chess again. John has also promised to find a bit of time to work on a final 2.0 release. If you are an Emacs user who knows and likes to play Chess, please give emacs-chess a whirl. If you find any problems, please file an issue on GitHub, or better yet, send us a pull request. There is an emacs-chess Debian package which has not been updated in a while. If you want to test the new code, be sure to grab it from GitHub directly. Once we reach a state that at least feels stable, I am going to update the Debian package of course.

31 January 2014

Andrew Pollock: [life] Day 4, Brazilian jiu jitsu, Science Friday and the Lunar New Year

I want Zoe to do one "extra curricular" activity per term this year. Something dance-related, something gymnastics-related and maybe some other form of sport (I'm thinking soccer). My girlfriend and I were wandering through Westfield Carindale on Saturday, and we happened upon a guy from Infinity Martial Arts touting his wares. I thought I'd suss it out, and they had an introductory offer where the sign-up fee and uniform fee were significantly reduced, and they had a pretty flexible timetable. They were just starting up their East Brisbane location, and the close proximity to home along with the reduced price sealed the deal. Long-term, I'd like for Zoe to learn Tae Kwon Do, for self-defense, but BJJ seemed as good as anything to get her introduced to the idea of martial arts. The class for 2-4 year olds was billed as "Fun and Fitness 4 Kids" so it's really a combination of listening to the instructor, some basic gymnastics-style stuff and a little bit of martial arts. We biked over this morning (going up Hawthorne Road is a slog) and got there in about 15 minutes via the direct route. It's in the upstairs of a gym in the middle of an industrial area, but it was pretty easily accessible by bike. They're still waiting on some of the equipment, so the space was a little spartan. It was just Zoe and I and a mother of two not quite 2 year old twin girls taking a trial class. First up, we got her uniform, and she looked so cute. There were pants and a jacket and a belt. I'm going to have to video the instructor tying up the belt next week so I can learn how to do it the right way. The class started with the kids standing on these flat coloured circles ("mushrooms") and effectively playing "Simon Says" ("instructor says") without being caught out. It was a pretty sneaky way of doing a bunch of warm up exercises like rotating the knees and ankles. Zoe did very well, but there were a few that she just point blank refused to do. Next the instructor set up a bunch of "stations" around the room. The first station involved me crouching in a fetal position with a football and Zoe had to try and tip me over to get the football. That was a load of fun. The next station was a few steps and foam-filled vinyl ramp, and Zoe just had to do a somersault down that. The next station was pretty much the same but taller, and Zoe had to do a "sausage roll" on her side down that. The next station was just a small exercise ball and Zoe had to do some "donkey kicks" on it. The final station involved me waving a couple of cut-off pool noodles at arm's length, and Zoe had to run in covering her head and give me a bear hug. We did a few rotations of these stations. It was heaps of fun. Next, the instructor got a whole bunch of ball pit balls of different colours, and scattered them over the floor, and put a basket at each end of the room. The kids were then instructed to retrieve specific colours as fast as possible. Zoe started out trying to get as many as possible in her arms before returning to the basket, but the idea was to do it one ball at a time. A lot of running back and forth. Finally, we did some actual BJJ (I think). It was called the "sleeping crocodile hold" or something like that. I had to lie on my back, and Zoe had to sneak up to me from my side and grapple me with one arm behind my head and the other around my waist and a knee in my side. I have no interest in Zoe learning mixed martial arts, but this class was so much fun. 
I was feeling a bit tired this morning before we headed out, but by the end of it I was so pumped. It was just the right combination of daddy/daughter rough and tumble, with a bit of gymnastics and following instruction. I'm pretty certain Zoe enjoyed it. I liked that the instructor stopped for a water break between each activity, so the kids were kept well hydrated throughout. We took the "scenic route" home, because we had no particular time constraints. It was more like 25 minutes and involved the Norman Park Greenway. I was so glad we went that way. It was a beautiful ride that I didn't know existed. Very indirect, it involved going through Woolloongabba, Coorparoo and Norman Park, around the back of Coorparoo State High School along the side of Norman Creek. It was semi-wetland conditions. The only part that was a bit annoying was where Norman Avenue met Wynnum Road. It was quite steep and the green light didn't last very long. I had wanted to do story time at Bulimba Library at 10:30am on Fridays, but I'd rather bike home via the Norman Park Greenway instead, because it's a nice ride. I've since decided that I'll just use the story time at the library during wet weather, when we'd be driving to BJJ anyway, and be able to make it to the library in time after class. I've also got to figure out where I'm going to fit doing some Science into the schedule. Fridays are going to be busy, I think. I managed to get Zoe down for a nap by a bit after 12:30pm today. She was pretty knackered after the class (as was I from biking home) so I let her watch a bit of TV while I prepared lunch. She was funny, she saw how sweaty I was when we got home, and suggested I take a shower while she watched some TV. I was a bit unprepared for my first Science Friday, though. I'd been considering going to the Sir Thomas Brisbane Planetarium after BJJ, but I discovered it's closed until February 7 for maintenance and an upgrade, so that cunning plan was thwarted. I used her nap time to order 365 Science Experiments and 50 Dangerous Things (You Should Let Your Children Do). I should get plenty of inspiration out of those two. After she woke up, we did the old "vinegar and sodium bicarbonate" trick as our science experiment. I've finally got a use for my Google Labs Founders' Award lab coat. I need to try and find some child-sized safety glasses. The adult ones barely stay on her little nose. After that, we went for a walk to our local toy store to see if they sold child-sized safety glasses (they didn't) and then Zoe watched a little bit of TV and then we walked to the CityCat to go to Teneriffe to catch a bus to Chinatown. I couldn't have timed it better if I tried. Just as we got to Chinatown, they started doing a thing with the Chinese dragons, and I hoisted Zoe up on my shoulders so she could watch. I wasn't sure how she was going to take it, with all the noise from the drums and the dragons themselves, but she was enthralled. We watched other acts and then my girlfriend joined us, and we went to a Chinese restaurant for dinner. I bought Zoe these training chopsticks a while ago, and she's taken to them like a duck to water. I brought them with us so she could use them at the restaurant, and she ate the biggest dinner I've ever seen her eat. After dinner, we caught the tail end of a procession and had some ice cream. We were starting to leave, and there was another dragon, and Zoe was brave enough to go and touch it on her own, for good luck. We then made our way back to the bus.
By the time the bus arrived and got us back to Teneriffe, and a CityCat finally arrived, it was very late, but Zoe was pretty good the whole time. It was probably the latest night she's ever had with me while we've been out on the go, and she only got a bit ratty once we were home. It didn't help that she'd forgotten that she'd put Cowie in a cupboard this morning and we burned some more time tracking her down. Today was a fantastic (and very full) day. I enjoyed it a lot, and I think Zoe did too. Fortunately, Fridays won't always be this full on.

29 January 2014

Benjamin Mako Hill: Aaron Swartz A Year Later

My friend Aaron Swartz died a little more than a year ago. This time last year, I was spending much of my time speaking with journalists and reading what they were writing about Aaron. Since the anniversary of his death, I have tried to take time to remember Aaron. I've returned to the things I wrote and the things I said, including this short article published last year in Red Pepper that SJ Klein and I wrote together but that I forgot to mention on my blog. I'm also excited to see that a documentary film about Aaron premiered at the Sundance Film Festival last week. I was interviewed for the film but am not in it. As I said last year at a memorial for Aaron, I think about Aaron frequently and often think about my own decisions in terms of what Aaron would have done. I continue to be optimistic about the potential for Aaron-inspired action.

26 January 2014

Neil Williams: Home server rack

The uses of this particular rack I'll cover in future entries; this post is about how I made the rack itself, with help from friends (Steve McIntyre & Andy Simpkins). A common theme is making allowances for using dev kit boards; ready-to-rack ARM servers are not here yet. My aim was to have 4, possibly 6, ARM dev kit boards running services from home, so there was no need for a standard 42U rack; a 9U rack is enough. Hence:
Wall Mounted 9U 450mm deep 19 inch rack enclosure, glass door
To run various other services around the house (firewall, file server etc.), a microserver was also necessary:
HP 704941-421 ProLiant Micro Server (AMD Turion II Neo N54L 2.2GHz, 2GB RAM, 250GB HDD, 2 Core, 7th Generation)
I chose to mount that on a bookcase beneath the wall-mounted rack, as it kept all the cables at the bottom of the rack itself. The microserver needed a second gigabit network card fitted to cope with being a firewall as well; if you do the same, ensure you find a suitable card with a low profile bracket. Some are described as being low profile but do not package the low profile bracket, only a low profile card and a full height bracket.
Intel EXPI9301CTBLK PRO1000 Network Card (note the low profile bracket in the pack)
The first of the dev kit requirements is the lack of boards which can be racked, so shelves are going to be needed, leading on to something to stop the boards wandering across the shelf when the cables are adjusted: velcro pads in my case. The second requirement is that dev kit boards are notorious for not rebooting cleanly. Nothing to do with the image being run; the board just doesn't cut power, or doesn't come back after cutting power. Sometimes this is down to early revisions of the board, sometimes the board pulls enough power through the USB serial converter to remain alive; whatever the cause, it won't reboot without user involvement. So a PDU becomes necessary, a remotely controllable one. New units tend to be expensive and/or only suitable for larger racks; I managed to find an older 8 port APC unit, something like: (Don't be surprised if that becomes a dead link; search for "APC Smart Slot Master Switch Power Distribution Unit".) Talking of power, I'm looking to use SATA drives with these boards, and the boards themselves come with a variety of wall wart plugs or USB cables, so a number of IEC sockets are needed, not the usual plugs:
Power cable IEC C14 plug 13A socket 25 cm
or, for devices which genuinely need 2A to boot (use the 1A port for attached SATA or leave it empty):

Black Universal 3.1A 15W 5V Dual USB Rapid Mains Charger Plug
Check the power output rating of the USB plugs used to connect to the mains as well: many are 1A or less. Keep the right plug for the right board.
Power is also going to be a problem if, like me, you want to run SATA drives off boards which support SATA. The lack of a standard case means that ATX power is awkward, so check out some cheap SATA enclosures to get a SATA connector with USB power. I am using these enclosures (prices seem to have risen since I obtained them):

Startech 2.5 inch eSATA USB External Hard Drive Enclosure for SATA HDD
Along with these:
eSATA to SATA Serial External Shielded Cable 1m
because the iMx53 boards have SATA connectors but the enclosure exports eSATA. Whilst this might seem awkward, the merit of having both eSATA and simple USB power on one enclosure is not to be under-estimated. (Avoid the stiffer black cables; space will get tight inside the rack.) Naturally, a 2.5 inch SATA drive is going to be needed for each enclosure; I'm using HDD but SSD is also an option. Also, consider simple 2 or 3 way fused adaptors so that the board and the SATA drive can be powered from a single PDU port; this makes rebooting much simpler if the board needs a power supply with an integrated plug instead of power over USB. Now to the networking (2 x 8 port was cheaper than 1 x 16):
Netgear GS108 8-port Gigabit Ethernet Unmanaged Switch
Don't forget the cat5 cables too. You'll want lots of short cables, 1m or shorter, inside the rack and a few longer ones going to the microserver and wall socket; I used 8x1m. Naturally, on the floor below your rack you are going to put a UPS, so the PDU power then needs to be converted to draw from the UPS via IEC plugs instead of standard mains. I decided to use a 6 gang extension with a 1m cable and an IEC plug; it was the only bit of wiring I had to do, and even those are available ready-made if you want to do it that way. Depending on the board, you may need your own serial to USB converters, and you'll certainly need some powered USB hubs. I'm using a wall mounted 9U rack, so I also needed a masonry drill and 4 heavy duty masonry M8 bolts. The rack comes with a mounting plate which needs to be bolted to the wall, but nothing else is included. This step is much easier with someone else to take the weight of the rack as it is guided into the brackets on the mounting plate; the bracket may need a little persuasion so that the bolt heads do not get in the way during mounting. Once mounted, the holes in the back of the rack allow for plenty of room; it's just getting to that point. The rack has side panels which latch out easily, providing easy maintenance. The glass door can be easily reversed to open from the opposite side. However, the key in the glass door is largely useless. The expectation is that the units in the rack are attached at the front, but dev boards on shelves are not going to be protected by a key in the front door; the key therefore ends up being little more than a handle for the glass door. OK. If you've got this far, it's a case of putting things together:
Economy Cage Nut Tool 19 inch racking, for cage nut extraction
Yes, you really do want one. Fine, do without the premium one, but the economy one will save you a lot of (physical) pain.

At this stage, it becomes clear that the normal 19 inch server rack shelves don't leave a lot of room at the back of the rack for the cables, and there are a lot of cables. Each board has power, a USB serial connection and network. The SATA has power too. The PDU has a power lead, and you'll need network switches too; the switches need power and an uplink cable. I positioned the supports in the rack as far forward as possible and attached the shelves to give enough room for the PDU on the base of the rack, the network switches resting on top and the extension bar (with the heavier, stiffer cables) at the back. (I need to bring my shelves another 1 or 2 positions further forward, as there is barely enough room for one cable between the shelf and the back of the rack, and that makes moving boards around harder than it needs to be.)

The PDU defaults to enabling all ports at startup, so connect to it over telnet and turn off the ports before connecting things, and sort out the network interface to match what the rest of the lab needs. (I'm using a 10. range and the PDU was originally set to use 192.168.1.) A sketch of scripting this step follows at the end of this post.

That's about it as far as the hardware setup is concerned. Just time to label up each board, note down which PDU port feeds which device and which serial to USB converter is on which device on the microserver, and check the power. My initial problem with one board was traced to the inability of the board to power SATA off the on-board USB port, even when using the provided 2A power supply. That meant adding a standard mains adaptor to feed both the SATA power and the board power off the one PDU port; there is little point powering off the board but not the SATA, or vice versa.

I haven't totalled up the expenditure, but the biggest expenses were the microserver and the wall mounted rack. Don't underestimate how much it will cost to buy 6 IEC plugs and various USB serial converters, or how much you may spend on peripheral items. There is quite a lot of room on the 2 shelves for more boards; what will limit the deployment in this rack is space for the cables, especially power. The shorter the power cables, the easier it will be to maintain the devices in the rack. It might be worth looking at a 12U rack, if only to give plenty of space for cables.

Once I've got the software working, I'll describe what this rack will be doing. It's got something to do with Debian, ARM, Linux and tests, but you've probably already guessed that much.
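As a footnote to the PDU step: the menu-driven telnet interfaces on these older APC units can be scripted, which helps once the port-to-board mapping grows. Below is a minimal sketch in Python; the host address, credentials, prompts and menu numbers are all assumptions that vary by firmware revision, so walk through a manual telnet session first and adjust to match what your unit actually prints.

#!/usr/bin/env python3
# Minimal sketch: power-cycle one outlet on an old menu-driven APC
# MasterSwitch-style PDU over telnet. All prompts, menu numbers and
# credentials below are assumptions -- they differ between firmware
# revisions, so verify against a manual telnet session first.
# telnetlib ships with Python <= 3.12 (it was removed in 3.13).
import telnetlib

PDU_HOST = "10.0.0.50"            # assumed address on the lab's 10. range
USER, PASSWORD = b"apc", b"apc"   # factory defaults on many APC units

# Local record of which outlet feeds which board/drive (example values).
OUTLETS = {1: "imx53-1", 2: "imx53-1-sata", 3: "imx53-2"}

def power_cycle(outlet: int) -> None:
    tn = telnetlib.Telnet(PDU_HOST, 23, timeout=10)
    tn.read_until(b"User Name :", timeout=5)
    tn.write(USER + b"\r")
    tn.read_until(b"Password", timeout=5)
    tn.write(PASSWORD + b"\r")
    # Walk the numbered menus (assumed layout): Device Manager ->
    # outlet -> Control -> Reboot Immediate -> confirm.
    for choice in (b"1", str(outlet).encode(), b"1", b"3", b"YES", b"\r"):
        tn.read_until(b">", timeout=5)
        tn.write(choice + b"\r")
    tn.close()
    print(f"cycled outlet {outlet} ({OUTLETS.get(outlet, 'unknown')})")

if __name__ == "__main__":
    power_cycle(1)

Many of these units can also be driven over SNMP, which may be more robust than scraping a menu, but the menu walk is easy to verify interactively against the same interface you used for the initial setup.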

30 September 2013

Russell Coker: Links September 2013

Matt Palmer wrote an insightful post about the use of the word "professional" [1]. It's one factor that makes me less inclined to be a member of professional societies.

The TED blog has an interesting article about Wikihouse, a project to create a set of free designs for houses to be cut out of plywood with a CNC milling machine [2]. The article also links to a TED talk by Alastair Parvin of the Wikihouse project which covers many interesting things other than designing houses.

An XKCD comic has one of the best explanations of bullying I've ever seen [3]. If you aren't familiar with XKCD, make sure you hover your mouse over it to read the hidden text.

The Fair Phone is a project to develop a smart phone starting with conflict-free resources and with fully free software (not like a typical Android build) [4]. It's an interesting project, and the price and specs seem within the normal range, so you're not paying a huge premium for a conflict-free phone. Unfortunately they only have one model, with a 4.3 inch display; if they had a competitor for the Galaxy Note then I'd be interested.

Patrick Stokes wrote an insightful article about why "I'm entitled to my opinion" is a bogus argument [5].

Jim Daly wrote an interesting TED blog post interviewing Rishi Manchanda about "Upstream Doctors" who look for the root causes of medical problems rather than just treating the symptoms [6].

Brian Krebs wrote an insightful article about the value of a hacked email account [7]. If you are trying to convince your users to use better passwords, then this should help.

Ron Garrett wrote an insightful series of articles on morality, hooked on the premise of whether it's wrong to torture kittens [8]. Part of his conclusion is that people who believe it's wrong to do such things tend to be more capable of working in large groups and forming a productive and efficient society.

The TED blog has an interesting post by Karen Eng summarising Andreas Raptopoulos' talk about using autonomous drones to deliver parcels in parts of the world that don't have usable roads [9]. Delivering parcels (which would start with medical supplies but would presumably move on to commercial transport) by drone is apparently really cheap. Being cheaper than building roads isn't going to be difficult, but it seems that they are going to make it cheaper than paying people to deliver parcels even if the roads were built. The main web site about this project is www.matternet.us; they are hiring electrical engineers. Here is the link for Andreas' TED talk [10].

The TOR blog has an interesting article by Emily Asher-Perrin comparing the different houses of Hogwarts [11]. It's an insightful article about personality attributes and gives more information than is available in the movies (I'd read the books if I had time).

12 September 2013

Russ Allbery: AFS::PAG 1.00

One of the very long-standing problems with AFS is that it doesn't have a stable library interface or API. The situation with shared libraries is even worse. Despite that, there is an XS-based AFS Perl module that tries to provide a native Perl interface to the various AFS libraries, despite all of their quirks. We've used that off and on for various things over the years. Unfortunately, the historic AFS code used an LWP (lightweight process) threading implementation that relied on hairy C internals, and when Perl switched over to being threaded with POSIX pthreads by default, the loadable module started having trouble. Various people managed to keep it limping along through Debian's squeeze release, but as of wheezy it just no longer worked; the thread mismatches and the mismatches between the various AFS libraries were too severe.

There's work in progress to fix this by building yet another profile of the AFS libraries, with all their warts, but in the meantime we needed access to the AFS PAG functions for various backend scripts. Everything else the Perl module did could be replaced by running the regular commands and parsing their output, but setpag has to be called in the process that should be affected in order to work safely. Hence this module.

AFS::PAG provides a native Perl interface to the API exposed by libkafs and libkopenafs for PAG manipulation: hasafs, haspag, setpag, and unlog. (Eventually it might provide access to pioctl, but I haven't done that work yet.) It supports any platform that has a libkafs or libkopenafs, or Linux without either (by implementing the pioctl interface itself). You can get the latest version from the AFS::PAG distribution page.
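For readers outside Perl, the underlying C API is small enough to see in a few lines. Here is a minimal sketch of the same PAG calls driven from Python via ctypes; the k_hasafs/k_setpag/k_unlog entry points follow the libkafs and libkopenafs conventions the module wraps, but treat the library discovery and the backend command name as assumptions for illustration, not as the module's own implementation.

#!/usr/bin/env python3
# Sketch of the PAG calls that AFS::PAG wraps, driven from Python via
# ctypes for illustration. The k_* entry points follow the libkafs /
# libkopenafs conventions; the backend command is a hypothetical
# placeholder.
import ctypes
import ctypes.util
import subprocess

lib = None
for name in ("kafs", "kopenafs"):
    path = ctypes.util.find_library(name)
    if path:
        lib = ctypes.CDLL(path)
        break
if lib is None:
    raise OSError("neither libkafs nor libkopenafs was found")

if lib.k_hasafs():          # is an AFS client running on this host?
    lib.k_setpag()          # new PAG for this process and its children
    subprocess.run(["aklog"], check=True)   # obtain tokens inside the PAG
    subprocess.run(["some-backend-job"])    # hypothetical job using AFS
    lib.k_unlog()           # discard the tokens when the job is done

The constraint the post describes is visible here: k_setpag() affects only the calling process and its children, which is why it cannot be replaced by shelling out to an external command the way the other operations can.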

30 June 2013

Russell Coker: Links June 2013

Cory Doctorow published a letter from a 14-year-old who had just read his novel Homeland [1].

I haven't had anything insightful to say about Aaron Swartz, so I think that this link will do [2].

Seth Godin gave an interesting TED talk about leading tribes [3]. I think everyone who is active in the FOSS community should watch this talk.

Ron Garrett wrote an interesting post about the risk of being hit by a dinosaur killer [4]. We really need to do something about this; the cost of defending against asteroids is almost nothing compared to defence spending.

Afra Raymond gave an interesting TED talk about corruption [5]. He focussed on his country, Trinidad and Tobago, but the lessons apply everywhere.

Wikihouse is an interesting project that is based around sharing designs for houses that can be implemented using CNC milling machines [6]. It seems to be at the early stages, but it has a lot of potential to change the building industry.

Here is a TED blog post summarising Dan Pallotta's TED talk about fundraising for nonprofits [7]. His key point is that moral objections to advertising for charities significantly reduce their ability to raise funds and impact the charitable mission. I don't entirely agree with his talk, which is very positive towards spending on promotion, but I think that he makes some good points which people should consider.

Here is a TED blog post summarising Peter Singer's TED talk about effective altruism [8]. His focus seems to be on ways of cheaply making a significant difference, which doesn't seem to agree with Dan Pallotta's ideas.

Patton Oswalt wrote an insightful article about the culture of stand-up comedians, which starts with joke stealing and heckling and ends with the issue of rape jokes [9].

Karen Eng wrote an interesting TED blog post about Anthony Vipin's invention of HAPTIC shoes for blind people [10]. The vibration of the shoes tells the person which way to walk, and a computer sees obstacles that need to be avoided.

David Blaine gave an interesting TED talk about how he prepared for a stunt of holding his breath for 17 minutes [11].

14 April 2013

Andreas Metzler: balance sheet snowboarding season 2012/13

All in all a below average season. Although we had lots of snow in December, my first day on-piste was December 22nd; riding in a snowstorm or thick fog is just not my kind of thing. The Christmas holiday season was absurdly warm, getting rid of most of the snow again. I managed 7 snow days from December 22nd to January 1st, but this was more sport than fun. Really sunny days were rare in the whole winter. Due to minor injury and minor illness I had to take long breaks from snowboarding (just two snow days in the period from January 8th to February 15th!). An early Easter cut the season short. This year I ended up in skiline.cc's top-100 list both for most meters of altitude in a single day and for the whole season, which shows that other people had a short season, too. On the plus side, we had enough snow, and good snow. This is also evident from the balance sheet below: I almost always went to Diedamskopf, where there is almost no artificial snow. Here is the balance sheet:
                         2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13
number of (partial) days      25      17      29      37      30      30      25      23
Damüls                        10      10       5      10      16      23      10       4
Diedamskopf                   15       4      24      23      13       4      14      19
Warth/Schröcken                0       3       0       4       1       3       1       0
total meters of altitude  124634   74096  219936  226774  202089  203918  228588  203562
highscore                 10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m
# of runs                    309     189     503     551     462     449     516     468

1 April 2013

Russ Allbery: Review: Echo

Review: Echo, by Jack McDevitt
Series: Alex Benedict #5
Publisher: Ace
Copyright: November 2010
Printing: November 2011
ISBN: 1-937007-00-6
Format: Mass market
Pages: 367
This is the fifth book in the ongoing Alex Benedict series, but reading the previous volumes is generally optional. While there are references to earlier events (and indeed I resorted to Wikipedia at one point to refresh my memory of the plot of the first book in the series), they aren't critical to the plot. They are mostly background for the reactions of other characters to Benedict and his firm.

Echo opens with the now-familiar prologue, giving us a fleeting and limited glimpse into the mystery that Benedict and Chase Kolpath will be pursuing for the rest of the book. This time, it features the explorer Somerset Tuttle, who has spent most of his life looking for intelligent alien life. (In the far future setting of these books, humanity has spread through a large region of the galaxy but has only encountered one other alien race.) In the twilight of his career and life, a friend who works as a guide for astronomical tours comes to him with a story about something she saw. What, we don't know, and won't for most of the book. But, whatever it is, Tuttle never goes public with it, which is directly contrary to what everyone would expect him to do with any news about alien life.

Benedict and Kolpath get involved some years later, after Tuttle's death, when a stone slab with unknown markings that was part of his estate shows up on the future equivalent of Craigslist, free to the first person who will haul it away. Benedict finds that interesting enough to go pick up, but then is really hooked when someone else goes to considerable lengths to try to make the slab disappear before anyone can get a good look at it. This, slowly, leads into poking around Tuttle's life and connections, the star tour industry, and the possible origins of the slab.

If you've read any of the previous books in this series, Echo is more of the same, and I mean that in a good way. I think McDevitt is at his best when writing this sort of cross between scientific puzzle, mystery, and non-military adventure, full of people who care deeply about knowing and understanding things and don't like to be thwarted in their efforts. There is McDevitt's trademark halting and red-herring-ridden investigation that succeeds in being engrossing and compelling, plenty of opportunities to guess what's really going on (a bit easier in this book than in some others), and a bit of action that I thought was better handled than the somewhat strained action scenes of The Devil's Eye. The book moves right along, despite multiple reversals in the progress of the investigation, and the final revelation does indeed justify the strange behavior of the various people Benedict encounters.

As with McDevitt in general and this series in particular, there is one caveat: the astronomical details are often interesting, but the world building in general is not terribly believable. Despite supposedly taking place millennia into the future, the society is a straight transplant (projection would imply more change than exists) of middle-class American culture, with bonus air cars, space ships, and slightly better computers. Similarly, alien ecologies suffer from severe Star Trek problems: all the planets feel like copies of Earth with a few minor tweaks. This just goes with the territory, and if you're up to the fifth book of the series, you've probably gotten used to it, but the unrealistic lack of divergence is actually used as a plot point here. That strained my suspension of disbelief a bit.
That said, this sort of extremely familiar social context does give the books an oddly calm, comfortable feeling. It's tempting to describe McDevitt as cozy science fiction. There's a bit of danger, a bit of suspense, but one is fairly certain that everything will work out in the end and the reader will leave the book knowing the solutions to most of the mysteries raised without having to work very hard at understanding the society or the puzzle. If that's what you're in the mood for (and I was when I read Echo), McDevitt delivers reliably. If you like the Alex Benedict series in general, recommended, as this is an excellent example of the type. Rating: 8 out of 10

15 March 2013

Benjamin Mako Hill: Aaron Swartz MIT Memorial

On Tuesday, there was a memorial for Aaron Swartz held at the MIT Media Lab. Unfortunately, I am traveling this week and was unable to attend. As I wrote recently, I was close to Aaron. I am also, more obviously, close to MIT and the lab. It was important to me to participate in the memorial, and I found a way to give a short talk with a video. I think the lab plans to post a recording of the whole event, but I have put the video of my own remarks below (and online in WebM). If you prefer, you can also read the text of the remarks. Video: http://www.youtube.com/embed/uYGoAJdE8jg

7 February 2013

Russell Coker: Links February 2013

Aaron on Software wrote an interesting series of blog posts about psychology and personal development, collectively titled "Raw Nerve"; here's a link to part 2 [1]. The best sections IMHO are 2, 3, and 7.

The Atlantic has an insightful article by Thomas E. Ricks about the failures in leadership in the US military that made the problems in Afghanistan and Iraq a lot worse than they needed to be [2].

Kent Larson gave an interesting TED talk about how to fit more people in cities [3]. He covers issues of power use, transport, space use, and sharing. I particularly liked the apartments that transform and the design for autonomous vehicles that make eye contact with pedestrians.

Andrew McAfee gave an interesting TED talk titled "Are Droids Taking Our Jobs" [4]. I don't think he adequately supported his conclusion that computers and robots are making things better for everyone (he also presented evidence that things are getting worse for many people), but it was an interesting talk anyway.

I, Psychopath is an interesting documentary about Sam Vaknin, who is the world's most famous narcissist [5]. The entire documentary is available from Youtube and it's really worth watching.

The movie Toy Story has been recreated in live action by a couple of teenagers [6]. That's a huge amount of work.

Rory Stewart gave an interesting TED talk about how to rebuild democracy [7]. I think that his arguments against using the consequences to argue for democracy and freedom (he suggests not using the "torture doesn't work" and "women's equality doubles the workforce" arguments) are weak, but he made interesting points all through his talk.

Ernesto Sirolli gave an interesting TED talk about aid work and development work which had a theme of "Want to help someone? Shut up and listen!" [8]. That made me think of Mary Gardiner's much quoted line from the comments section of her Wikimania talk, which was also "shut up and listen".

Waterloo Labs has some really good engineering Youtube videos [9]. The real life Mario Kart game has just gone viral, but there are lots of other good things like the iPhone controlled car and eye controlled Mario Brothers.

Robin Chase of Zipcar gave an interesting TED talk about various car sharing systems (Zipcar among others), congestion taxes, the environmental damage that's caused by cars, mesh networks, and other things [10]. She has a vision of a future where most cars are shared and act as nodes in a giant mesh network.

Madeleine Albright gave an interesting TED talk about being a female diplomat [11]. She's an amazing speaker.

Ron Eglash gave an interesting TED talk about the traditional African use of fractals [12]. Among the many interesting anecdotes concerning his research in Africa, he was initiated as a priest after explaining Georg Cantor's set theories.

Racialicious has an insightful article about the low expectations that members of marginalised groups have of members of the privileged groups [13].

Rick Falkvinge has a radical proposal for reforming copyrights with a declared value system [14]. I don't think that this will ever get legislative support, but if it did I think it would work well for books and songs. I think that some thought should be given to how this would work for blogs and other sources of periodical content. Obviously filing for every blog post would be an unreasonable burden; maybe aggregating a year of posts into one copyright assignment block would work.

Scott Fraser gave an interesting TED talk about the problem with eyewitness testimony [15].
He gave a real-world example of what had to be done to get an innocent man acquitted; it's quite amazing.

Sarah Kendzior wrote an interesting article for al Jazeera about the common practice in American universities of paying adjunct professors wages that are below the poverty line [16]. That's just crazy; when students pay record tuition fees there's more than enough money to pay academics decent wages. Where does all the money go, anyway?

6 February 2013

Biella Coleman: Edward Tufte was a phreak

It has been so very long since I have left a trace here. I guess moving to two new countries (Canada and Quebec), starting a new job, working on Anonymous, and finishing my first book was a bit much. I miss this space, not so much because what I write here is any good, but because it is a handy way for me to keep track of time and what I do and even think. My life feels like a blur at times, and hopefully here I can see its rhythms and changes a little more clearly if I occasionally jot things down. So I thought it would be nice to start with something that I found surprising: the famed information designer Edward Tufte, a professor emeritus at Yale, was a phone phreak (and there is a stellar new book on the topic by former phreak Phil Lapsley). He spoke about his technological exploration during a sad event, a memorial service in NYC which I attended for the hacker and activist Aaron Swartz. I had my wonderful RA transcribe the speech, so here it is [we may not have the right spelling for some of the individuals, so please let us know of any mistakes]:
Edward Tufte's Speech from Aaron Swartz's Memorial
Speech starts 41:00 [video cuts out in beginning]
We would then meet over the years for a long talk every now and then, and my responsibility was to provide him with a reading list, a reading list for life. And then about two years ago, Quinn had Aaron come to Connecticut, and he told me about the four and a half million downloads of scholarly articles, and my first question was, "Why isn't MIT celebrating this?"
[Video cuts out again]
Obviously helpful in my career there, he then became president of the Mellon Foundation. He then retired from the Mellon Foundation, but he was asked by the Mellon Foundation to handle the problem of JSTOR and Aaron. So I wrote Bill Bullen(sp?) an email about it. I said first that Aaron was a treasure, and then I told a personal story about how I had done some illegal hacking, been caught at it, and what happened. In 1962, my housemate and I invented the first blue box. That's a device that allows for free, undetectable, unbillable long distance telephone calls. We got this up and played around with it, and the end of our research came when we concluded what was the longest long distance call ever made, which was from Palo Alto to New York, time-of-day, via Hawaii. During our experimentation, AT&T, on the second day it turned out, had tapped our phone, but it wasn't until about 6 months later that I got a call from the gentleman, AJ Dodge, senior security person at AT&T, and I said, "I know what you're calling about." And so we met, and he said, "You, what you are doing is a crime that would..." you know, all that. But I knew it wasn't serious, because he actually cared about the kind of engineering stuff, and complained that the tone signals we were generating were not the standard, because they record them and play them back in the network to see what numbers we were trying to reach, but they couldn't break through the noise of our signal. The upshot of it was, oh, he asked why we went off the air after about 3 months, because this was to make long distance telephone calls for free, and I said this was because we regarded it as an engineering problem and we had made the longest long distance call, and so that was it. So the deal was, as I explained in my email to Bill Bullen, that we wouldn't try to sell this, and we were told, I was told, that crime syndicates would pay a great deal for this, that we wouldn't do any more of it, and that we would turn our equipment over to AT&T. And so they got a complete vacuum tube isolator kit for making long distance phone calls. But I was grateful to AJ Dodge and, I must say, to AT&T that they decided not to wreck my life. And so I told Bill Bullen that he had a great opportunity here, to not wreck somebody's life. Of course, thankfully, he did the right thing.
Aaron's unique quality was that he was marvelously and vigorously different. There is a scarcity of that. Perhaps we can all be a little more different too.
Thank you very much.
